
    NEW SOURCE OF GEOSPATIAL DATA: CROWDSENSING BY ASSISTED AND AUTONOMOUS VEHICLE TECHNOLOGIES

    The ongoing proliferation of remote sensing technologies in the consumer market has been rapidly reshaping geospatial data acquisition and, subsequently, data processing and information dissemination. Smartphones have clearly established themselves as the primary crowdsourced data generators, providing an enormous volume of remotely sensed data with fairly good georeferencing. Besides their potential to map the environment of their users, smartphones provide information for monitoring the dynamic content of the object space. For example, real-time traffic monitoring is one of the best-known and most widely used real-time crowdsensing applications, in which the smartphones in vehicles jointly contribute to an unprecedentedly accurate traffic flow estimation. We are now witnessing another milestone, as driverless vehicle technologies are becoming another major source of crowdsensed data. Due to safety concerns, the sensing requirements are higher: the vehicles must sense other vehicles and the road infrastructure under any condition, not just in daylight and favorable weather, and at high speeds. Furthermore, the sensing is based on redundant and complementary sensor streams to achieve the robust object space reconstruction needed to avoid collisions and maintain normal travel patterns. At present, the remotely sensed data in assisted and autonomous vehicles are discarded, or only partially recorded for R&D purposes. In the long run, however, as vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication technologies mature, recording these data will become commonplace and will provide an excellent source of geospatial information for road mapping, traffic monitoring, etc. This paper reviews the key characteristics of crowdsourced vehicle data based on experimental data, and then the processing aspects, including the Data Science and Deep Learning components.

    Experimental evaluation of a UWB-based cooperative positioning system for pedestrians in GNSS-denied environment

    Cooperative positioning (CP) utilises information sharing among multiple nodes to enable positioning in Global Navigation Satellite System (GNSS)-denied environments. This paper reports on the performance of a CP system for pedestrians using Ultra-Wide Band (UWB) technology in GNSS-denied environments. The data set was collected as part of a benchmarking measurement campaign carried out at the Ohio State University in October 2017. Pedestrians were equipped with a variety of sensors, including two different UWB systems, on a specially designed helmet serving as a mobile multi-sensor platform for CP. Different users walked in stop-and-go mode along trajectories with predefined checkpoints in various challenging environments. In the developed CP network, both Peer-to-Infrastructure (P2I) and Peer-to-Peer (P2P) measurements are used to position the pedestrians. The results show that the proposed system can achieve decimetre-level accuracies (on average, around 20 cm) in the complete absence of GNSS signals, provided that measurements from infrastructure nodes are available and the network geometry is good. When these favourable conditions are absent, the average accuracy degrades to the metre level. Further, it is experimentally demonstrated that including P2P cooperative range observations enhances the positioning accuracy and, in extreme cases when only one infrastructure measurement is available, P2P CP may reduce positioning errors by up to 95%. The complete test setup, the methodology for development, and the data collection are discussed in this paper. In the next version of this system, additional observations, such as Wi-Fi, camera imagery, and other signals of opportunity, will be included.
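
    At its core, such a CP solver is a nonlinear least-squares adjustment over both P2I and P2P range observations. The following is a minimal sketch of that idea, not the paper's implementation; the anchor layout, noise levels, and single-peer configuration are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Known infrastructure (anchor) node positions in metres -- illustrative values
anchors = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 20.0]])

def residuals(x, p2i_ranges, peer_pos, p2p_range):
    """Observed-minus-computed range residuals for one pedestrian at x.

    p2i_ranges : UWB ranges to the anchors (P2I observations)
    peer_pos   : known/estimated position of a cooperating peer
    p2p_range  : UWB range to that peer (P2P observation)
    """
    r_i = np.linalg.norm(anchors - x, axis=1) - p2i_ranges
    r_p = np.linalg.norm(peer_pos - x) - p2p_range
    return np.append(r_i, r_p)

# Simulated noisy observations around a true position (15, 10)
true_pos = np.array([15.0, 10.0])
peer_pos = np.array([20.0, 5.0])
rng = np.random.default_rng(0)
p2i = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.1, 3)
p2p = np.linalg.norm(peer_pos - true_pos) + rng.normal(0, 0.1)

sol = least_squares(residuals, x0=np.array([10.0, 10.0]),
                    args=(p2i, peer_pos, p2p))
print("estimated position:", sol.x)
```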

    SEMANTIC LABELING OF STRUCTURAL ELEMENTS IN BUILDINGS BY FUSING RGB AND DEPTH IMAGES IN AN ENCODER-DECODER CNN FRAMEWORK

    In the last decade, we have observed an increasing demand for indoor scene modeling in various applications, such as mobility inside buildings, emergency and rescue operations, and maintenance. Automatically distinguishing between structural elements of buildings, such as walls, ceilings, floors, windows and doors, and typical objects in buildings, such as chairs, tables and shelves, is particularly important for many tasks, such as 3D building modeling or navigation. This information can generally be retrieved through semantic labeling. In the past few years, convolutional neural networks (CNN) have become the preferred method for semantic labeling. Furthermore, there is ongoing research on fusing RGB and depth images in CNN frameworks. For pixel-level labeling, encoder-decoder CNN frameworks have been shown to be the most effective. In this study, we adopt an encoder-decoder CNN architecture to label structural elements in buildings and investigate the influence of using depth information on the detection of typical objects in buildings. For this purpose, we introduce an approach that combines the depth map with the RGB image by converting the original image to HSV color space and substituting the V channel with the depth information (D), and uses the resulting image in the CNN architecture. As a further variation of this approach, we also transform the HSD images back to RGB color space and use them within the CNN. This allows for using a CNN designed for three-channel image input and directly comparing our results with RGB-based labeling within the same network. We perform our tests using the Stanford 2D-3D-Semantics Dataset (2D-3D-S), a widely used indoor dataset. Furthermore, we compare our approach with results obtained using four-channel input created by stacking RGB and depth (RGBD). Our investigation shows that fusing RGB and depth improves semantic labeling results, particularly on structural elements of buildings. On the 2D-3D-S dataset, we achieve up to 92.1% global accuracy, compared to 90.9% using RGB only and 93.6% using RGBD. Moreover, the Intersection over Union scores improved when using depth, which shows that it gives better labeling results at the boundaries.
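
    The channel-substitution step described above can be expressed in a few lines: convert the RGB image to HSV, overwrite the V channel with the normalized depth map, and optionally convert the result back to RGB space so an unmodified three-channel network can consume it. A minimal sketch with OpenCV; the file names and the 8-bit depth normalization are assumptions, not the paper's exact preprocessing.

```python
import cv2
import numpy as np

# Hypothetical input files
bgr = cv2.imread("room_rgb.png")  # H x W x 3; OpenCV loads channels as BGR
depth = cv2.imread("room_depth.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

# Scale depth to 8 bits so it can replace the V channel (assumed normalization)
d8 = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
hsv[:, :, 2] = d8          # keep H and S, replace V with depth -> "HSD" image
hsd = hsv

# Variation: transform the HSD image back to RGB space for a stock 3-channel CNN
hsd_as_bgr = cv2.cvtColor(hsd, cv2.COLOR_HSV2BGR)
cv2.imwrite("room_hsd.png", hsd_as_bgr)
```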

    Indoor Ultra-Wide Band Network Adjustment using Maximum Likelihood Estimation

    This study is part of our ongoing research on using ultra-wide band (UWB) technology for navigation at the Ohio State University. Our tests have indicated that UWB two-way time-of-flight ranges under indoor circumstances follow a Gaussian mixture distribution, which may be caused by the incompleteness of the functional model. In this case, maximum likelihood estimation (MLE) may provide a better solution for the node coordinates from the observed ranges than the widely used least squares approach. The prerequisite of the maximum likelihood method is knowing the probability density functions. The 30 Hz sampling rate of the UWB sensors enables estimating these functions between each pair of nodes from samples collected in static positioning mode. To test the MLE hypothesis, a UWB network was established in a dense multipath environment for test data acquisition. The least squares and maximum likelihood coordinate solutions were determined and compared, and the results indicate that better accuracy can be achieved with maximum likelihood estimation.
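
    The estimation step can be sketched as follows: fit a two-component Gaussian mixture to the static range samples of each node pair, then choose the coordinates that maximize the summed log-likelihood of the observed ranges under those mixtures. A minimal single-node illustration, not the study's implementation; the anchor layout, the simulated multipath mode, and the demeaning shortcut for building the error model are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.mixture import GaussianMixture

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0]])  # known nodes (assumed)

# Static-mode range samples per anchor (e.g., 30 Hz over several seconds);
# simulated here with a biased second mode mimicking multipath delay.
true_pos = np.array([4.0, 3.0])
rng = np.random.default_rng(1)
samples = []
for a in anchors:
    d = np.linalg.norm(a - true_pos)
    clean = rng.normal(d, 0.05, 200)
    biased = rng.normal(d + 0.30, 0.10, 50)   # multipath-delayed mode
    samples.append(np.concatenate([clean, biased]))

# Error model per anchor: a 2-component GMM fitted to demeaned samples
# (assumes the sample mean approximates the true range -- a simplification)
gmms = [GaussianMixture(2, random_state=0).fit((s - s.mean()).reshape(-1, 1))
        for s in samples]

def neg_log_likelihood(x):
    """-log L of position x: each range sample's residual is scored
    against that pair's fitted Gaussian mixture density."""
    nll = 0.0
    for a, g, s in zip(anchors, gmms, samples):
        d = np.linalg.norm(a - x)
        nll -= g.score_samples((s - d).reshape(-1, 1)).sum()
    return nll

x_mle = minimize(neg_log_likelihood, x0=np.array([5.0, 4.0]),
                 method="Nelder-Mead").x
print("MLE position:", x_mle)
```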

    EXPERIENCES WITH ACQUIRING HIGHLY REDUNDANT SPATIAL DATA TO SUPPORT DRIVERLESS VEHICLE TECHNOLOGIES

    As vehicle technology moves towards higher autonomy, the demand for highly accurate geospatial data is rapidly increasing, as accurate maps have huge potential to increase safety. In particular, high-definition 3D maps, including road topography and infrastructure as well as city models along the transportation corridors, represent the necessary support for driverless vehicles. In this effort, a vehicle equipped with high-, medium- and low-resolution active and passive imaging sensors acquired data in a typical traffic environment, represented here by the OSU campus, where GPS/GNSS data are available along with other navigation sensor data streams. The data streams can be used for two purposes. First, high-definition 3D maps can be created by integrating all the sensory data, and Data Analytics/Big Data methods can be tested for automatic object space reconstruction. Second, the data streams can support algorithmic research for driverless vehicle technologies, including object avoidance, navigation/positioning, detection of pedestrians and bicyclists, etc. Crucial cross-performance analyses of map database resolution and accuracy with respect to sensor performance metrics can be derived to achieve an economical solution for accurate driverless vehicle positioning. These, in turn, could provide essential information for optimizing the choice of geospatial map databases and sensor quality to support driverless vehicle technologies. The paper reviews the data acquisition and primary data processing challenges and performance results.

    ESTIMATING AIRCRAFT HEADING BASED ON LASERSCANNER DERIVED POINT CLOUDS

    Using LiDAR sensors for tracking and monitoring an operating aircraft is a new application. In this paper, we present data processing methods to estimate the heading of a taxiing aircraft using laser point clouds. During the data acquisition, a Velodyne HDL-32E laser scanner tracked a moving Cessna 172 airplane, and the point clouds captured at different times were used for heading estimation. After formulating the problem and specifying the equation of motion needed to reconstruct the aircraft point cloud from the consecutive scans, three methods are investigated. The first requires a reference model to estimate the relative angle from the captured data by fitting different cross-sections (horizontal profiles). In the second approach, the iterative closest point (ICP) method is used between consecutive point clouds to determine the horizontal translation of the captured aircraft body. Three ICP variants were compared, namely the ordinary 3D, 3-DoF 3D and 2-DoF 3D ICP; the 2-DoF 3D ICP was found to provide the best performance. Finally, the last algorithm searches for the unknown heading and velocity parameters by minimizing the volume of the reconstructed airplane. The three methods were compared using three test datasets distinguished by object-sensor distance, heading and velocity. We found that the ICP algorithm fails at long distances and when the aircraft's motion direction is perpendicular to the scan plane, but the first and third methods give robust and accurate results at a 40 m object distance and at ~12 knots for a small Cessna airplane.
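
    The best-performing variant, 2-DoF 3D ICP, restricts the transformation between consecutive scans to a horizontal translation (tx, ty), from which the heading follows as atan2(ty, tx). Below is a minimal translation-only ICP sketch using nearest-neighbor correspondences; the convergence settings and synthetic points are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2dof(src, dst, iters=50, tol=1e-6):
    """Translation-only (tx, ty) ICP between two 3D point clouds.

    Each iteration matches every source point to its nearest destination
    point and updates the horizontal translation by the mean residual."""
    t = np.zeros(2)
    tree = cKDTree(dst)
    for _ in range(iters):
        moved = src + np.r_[t, 0.0]
        _, idx = tree.query(moved)
        delta = (dst[idx] - moved)[:, :2].mean(axis=0)  # horizontal shift only
        t += delta
        if np.linalg.norm(delta) < tol:
            break
    return t

# Synthetic check: the same cloud shifted by a known horizontal motion
rng = np.random.default_rng(2)
scan_t0 = rng.uniform(0, 5, (500, 3))
scan_t1 = scan_t0 + np.array([0.8, 0.3, 0.0])
tx, ty = icp_2dof(scan_t0, scan_t1)
heading = np.degrees(np.arctan2(ty, tx))
print(f"translation: ({tx:.2f}, {ty:.2f}) m, heading: {heading:.1f} deg")
```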

    Evaluation of a mobile multi-sensor system for seamless outdoor and indoor mapping

    Indoor mapping has been gaining importance recently. One of the main applications of indoor maps is personal navigation. For this application, the connection to the outdoor map is very important, as users typically enter the building from outside and navigate to their destination inside. Obtaining this connection, however, is challenging, as georeferencing indoor maps is difficult due to the weak or totally absent GPS signal, which generally makes positioning impossible. One solution to this problem could be matching indoor and outdoor datasets. Unfortunately, this is difficult due to the very low or non-existent overlap between the indoor and outdoor datasets, as well as their differing characteristics. To overcome this problem, we propose a mobile mapping system that can seamlessly capture the outdoor and indoor scene. Our prototype system contains three laser scanners, six RGB cameras, two GPS receivers and one IMU. In this paper, we present an approach to seamlessly map a building and define the requirements for the mapping system. We primarily describe the construction phase of this system. Finally, we evaluate the performance of our mapping system with regard to the defined requirements.

    MONITORING AIRCRAFT MOTION AT AIRPORTS BY LIDAR

    Improving sensor performance, combined with better affordability, provides better object space observability, resulting in new applications. Remote sensing systems are primarily concerned with acquiring data on the static components of our environment, such as the topographic surface of the earth, transportation infrastructure, city models, etc. Observing the dynamic component of the object space is still rather rare in the geospatial application field; vehicle extraction and traffic flow monitoring are a few examples of using remote sensing to detect and model moving objects. Deploying a network of inexpensive LiDAR sensors along taxiways and runways can provide geometrically and temporally rich geospatial data: the aircraft body can be extracted from the point cloud, and then motion parameters can be estimated from consecutive point clouds. Acquiring accurate aircraft trajectory data is essential to improving aviation safety at airports. This paper reports on the initial experiences obtained by using a network of four Velodyne VLP-16 sensors to acquire data along a runway segment.
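
    A first-order version of the motion estimation mentioned above reduces to differencing aircraft positions extracted from consecutive scans. The sketch below estimates speed and heading from the centroids of aircraft returns segmented out of two scans captured dt apart; the height-threshold segmentation rule and the synthetic numbers are illustrative assumptions only.

```python
import numpy as np

def aircraft_centroid(scan, ground_z=0.3):
    """Crudely segment the aircraft body as all returns above an assumed
    ground level, then summarize it by its centroid (illustrative rule)."""
    return scan[scan[:, 2] > ground_z].mean(axis=0)

def motion_parameters(scan_t0, scan_t1, dt):
    """Estimate horizontal speed and heading from two scans dt apart."""
    d = aircraft_centroid(scan_t1) - aircraft_centroid(scan_t0)
    speed = np.linalg.norm(d[:2]) / dt            # m/s, horizontal component
    heading = np.degrees(np.arctan2(d[1], d[0]))  # degrees from the +x axis
    return speed, heading

# Synthetic check: a cloud shifted by 6 m in x over 1 s -> ~6 m/s, 0 deg
rng = np.random.default_rng(3)
body = rng.uniform([0, 0, 0.5], [10, 3, 3], (1000, 3))
print(motion_parameters(body, body + [6.0, 0.0, 0.0], dt=1.0))
```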

    GEOREFERENCING IN GNSS-CHALLENGED ENVIRONMENT: INTEGRATING UWB AND IMU TECHNOLOGIES

    Acquiring geospatial data in GNSS-compromised environments remains a problem in mapping and positioning in general. Urban canyons, heavily vegetated areas and indoor environments represent different levels of GNSS signal availability, from weak to no signal reception. Even outdoors, with multiple GNSS constellations and an ever-increasing number of satellites, there are many situations with limited or no access to GNSS signals. Independent navigation sensors, such as IMUs, can provide high-data-rate information, but their accuracy degrades quickly as the measurements drift over time, unless position fixes are provided from another source. At The Ohio State University's Satellite Positioning and Inertial Navigation (SPIN) Laboratory, as one feasible solution, Ultra-Wideband (UWB) radio units are used to aid positioning and navigation in GNSS-compromised environments, including indoor and outdoor scenarios. Here we report on experiences with georeferencing a pushcart-based sensor system in canopied areas. The positioning system is based on UWB and IMU sensor integration, and provides platform orientation for an electromagnetic induction (EMI) sensor. Performance evaluation results are provided for various test scenarios, confirming acceptable results for applications where high accuracy is not required.
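
    UWB/IMU integration of this kind is typically realized as a filter in which the IMU propagates the state at high rate while UWB-derived position fixes bound the accumulated drift. A minimal 1D constant-velocity Kalman filter sketch of that pattern follows; the rates, noise values, and state layout are assumptions, not the SPIN Laboratory's implementation.

```python
import numpy as np

dt = 0.01                                # 100 Hz IMU epoch (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity state transition
B = np.array([0.5 * dt**2, dt])          # acceleration input mapping
H = np.array([[1.0, 0.0]])               # UWB observes position only
Q = np.diag([1e-6, 1e-4])                # process noise (assumed)
R = 0.05**2                              # UWB fix variance, ~5 cm (assumed)

def predict(x, P, accel):
    """IMU epoch: dead-reckon the state forward; covariance grows (drift)."""
    return F @ x + B * accel, F @ P @ F.T + Q

def update(x, P, z):
    """UWB epoch: a position fix bounds the accumulated IMU drift."""
    S = (H @ P @ H.T).item() + R
    K = (P @ H.T / S).ravel()
    x = x + K * (z - (H @ x).item())
    P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P

# Usage sketch: many IMU predictions per UWB update
x, P = np.zeros(2), np.eye(2)            # state: [position, velocity]
for k in range(100):
    x, P = predict(x, P, accel=0.1)      # hypothetical accelerometer reading
    if k % 10 == 0:                      # UWB fixes at 10 Hz (assumed)
        x, P = update(x, P, z=0.5)       # hypothetical UWB-derived position
```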

    TRACKING VEHICLE IN GSM NETWORK TO SUPPORT INTELLIGENT TRANSPORTATION SYSTEMS

    The penetration of GSM-capable devices is very high, especially in Europe. To exploit the potential of turning these mobile devices into dynamic data acquisition nodes that provide valuable data for Intelligent Transportation Systems (ITS), position information is needed. The paper describes the basic operating principles of the GSM system and provides an overview of the existing methods for deriving location data in the network. A novel positioning solution is presented that relies on handover (HO) zone measurements; the zone geometry properties are also discussed. A new concept of HO zone sequence recognition is introduced that involves the application of Probabilistic Deterministic Finite State Automata (PDFA). Both the potential commercial applications and the use of the derived position data in ITS are discussed for tracking vehicles and monitoring traffic flow. As a practical cutting-edge example, the possibility of integrating the technology into the SafeTRIP platform (developed in an EC FP7 project) is presented.
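
    A PDFA over HO zones can be represented as states with probability-weighted deterministic transitions; recognizing a candidate route then amounts to scoring the observed HO zone sequence against each route's automaton and picking the best match. A minimal sketch of that scoring; the zone labels, states, and probabilities are illustrative, not the paper's model.

```python
from math import log

# Hypothetical PDFA for one route: state -> {observed HO zone: (next state, prob)}
pdfa = {
    "S0": {"Z1": ("S1", 0.9), "Z2": ("S0", 0.1)},
    "S1": {"Z2": ("S2", 0.8), "Z1": ("S1", 0.2)},
    "S2": {"Z3": ("S2", 1.0)},
}

def log_likelihood(sequence, start="S0"):
    """Score an observed HO zone sequence against the PDFA; returns -inf
    when the automaton has no transition for the observed zone."""
    state, ll = start, 0.0
    for zone in sequence:
        if zone not in pdfa.get(state, {}):
            return float("-inf")
        state, p = pdfa[state][zone]
        ll += log(p)
    return ll

# The route automaton with the highest log-likelihood matches the vehicle's path
print(log_likelihood(["Z1", "Z2", "Z3"]))
```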